Using Windows System Monitor
Microsoft Windows
server operating systems offer the System Monitor graphical tool
(formerly known as Performance Monitor [PerfMon]) to measure the
performance of all aspects of the server. You can use System Monitor to
view SQL Server and other applications from a coarse perspective,
looking at events from a high level. You can view SQL Server objects and
performance counters by using the System Monitor tool. You can also use
it to gather information pertaining to the behavior of processors,
memory, cache, and other object resources on the server.
Each object you monitor
by using System Monitor has an associated set of counters. These
counters are put in place when software is installed in the Windows
environment. You can use them to measure aspects of an object, such as
device usage, queue lengths, delays, and other factors. In general,
System Monitor provides the most benefit when you are looking at how the
server as a whole is operating and how applications are performing
relative to the server's overall performance.
When you install SQL
Server and its related components, several sets of counters are placed
in the environment and made available for monitoring. The general list
of objects available to be monitored depends on which SQL components
have been installed. The following prefixes identify the objects you can
monitor in SQL Server:
.NET CLR— Use this prefix to access the common language runtime.
MSAS 2005— Use this prefix to access Microsoft Analysis Services.
MSFTESQL— Use this prefix to access the Microsoft Full-Text engine.
MSRS 2005— Use this prefix to access Microsoft Reporting Services.
MSSQL$InstanceName— Use this prefix to access a named instance of the database engine.
SQLAgent$InstanceName— Use this prefix to access a named instance of the SQL Server Agent.
SQLAgent— Use this prefix to access the default instance of the SQL Server Agent.
SQLServer— Use this prefix to access the default instance of the database engine.
Each of the software
categories has several related objects, and within each object is a set
of one or more counters. Depending on the information you want to
gather, you need to select the objects in the software prefix you are
looking for and then select one or more counters from that object’s
list. Each of the software categories has some counters that may be of
interest.
Table 1. Common System Monitor Counters

| Counter | What It Monitors | Most Commonly Accessed Counts |
|---|---|---|
| .NET CLR:Data | Specifics about data connections | Current connections, pooled connections, failed connections |
| .NET CLR Exceptions | Specifics about exception instances | Exceptions thrown |
| .NET CLR:Networking | Specifics about data flow through the network | Bytes sent, bytes received |
| MSAS 2005:Cache | Particulars about the Analysis Services cache | Lookups, hits, misses |
| MSAS 2005:Memory | Particulars about the memory utilization of Analysis Services | Memory usage |
| MSAS 2005:Proactive Caching | Particulars about Analysis Services proactive caching use | Notifications |
| MSFTESQL:Catalogs | Facts about full-text index catalogs | State |
| SQLAgent:Jobs | Data about jobs executed under agent control | Active jobs, failed jobs, queued jobs |
| SQLAgent:JobSteps | Data about individual job steps | Active steps, queued steps |
| SQLAgent:Alerts | Data about alerts fired | Activated alerts |
| SQLServer:Locks | Information about lock requests made | Lock time-outs, deadlocks per second |
| SQLServer:Plan Cache | Information about the SQL Server cache | Hit ratio, objects in use |
| SQLServer:Buffer Manager | Information about memory buffers and their use | Buffer cache hit, lazy writes, read-aheads |
| SQLServer:SQL Statistics | Information about aspects of T-SQL queries | Batch requests |
| SQLServer:General Statistics | Information about general serverwide activity | Users connected, active temporary tables, logins |
| SQLServer:Databases | Information about a SQL Server database | Free log space, number of transactions, transactions per second |
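For the SQL Server objects, the same counter values can also be read from T-SQL through the `sys.dm_os_performance_counters` dynamic management view (available in SQL Server 2005 and later). A minimal sketch, run against the default instance:

```sql
-- Read Buffer Manager counters from T-SQL.
-- On a named instance, the object_name prefix is MSSQL$InstanceName
-- rather than SQLServer.
SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE 'SQLServer:Buffer Manager%'
ORDER BY counter_name;
```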
As you begin any of these
processes, you should start with the hardware and operating system and
then proceed to the application server. As you get further into data
gathering and analysis, you should look into each database and the
interactions between the data and the user applications. To view SQL
Server tasks in detail after the initial data gathering, you use SQL
Server Profiler to develop a more complete picture.
Using SQL Server Profiler
The
SQL Profiler tool is a graphical mechanism that enables you to monitor
SQL Server events. This tool enables you to capture and save data about
every event on a server. The data can be stored in a file or a SQL Server
table. You can later analyze the stored data and replay the events,
which allows you to schedule a trace and defer the analysis until time
permits.
You should use SQL
Profiler to monitor only the events of interest. Monitoring all events
on the server produces so much data that it can become overwhelming. You
can filter large traces so that you view only the information you want
to see, such as a single database or single user. You can use filters to
view a subset of the event data captured. Monitoring too many events
also adds excessive overhead to the server. This overhead can slow the
monitoring process and cause the trace to produce an extremely large
file or table, which is a particular concern when you are performing
monitoring over long periods.
It is a good idea not to
run the SQL Server Profiler on the same server you are monitoring.
Running the Profiler uses considerable resources, and that can
noticeably affect the server’s performance. Instead, you should run
Profiler on another server or workstation that can act as a monitoring
machine and have all data collected there.
SQL Profiler is a useful tool that you can use to do the following:
Monitor the performance of SQL Server.
Debug T-SQL statements and stored procedures.
Identify long-running queries.
Step through procedures to ensure that they are working as expected.
Capture events on a production system and replay them on a test system.
Diagnose problem situations by capturing event data.
Audit and review events.
In
troubleshooting the SQL Server environment, you typically use SQL
Profiler. You can best use this tool to find queries that are not
performing well or ones that are executing for long periods. Profiler is
also useful in identifying causes of data blocking and deadlock
situations. In monitoring a healthy server, you generally use SQL
Profiler to monitor performance and to audit application, user,
database, and job activity.
Caution
Profiler can affect overall database performance, sometimes
significantly, so you should use it sparingly and remember to turn it
off as soon as it has captured the necessary information.
Before you start using Profiler, you should become familiar with profile templates. A template
defines the criteria for each event you want to monitor with SQL
Profiler. You can use predefined templates for individual circumstances,
and you can create your own templates as well, specifying the events,
data columns, and filters to use. You can save a template, and at any
time, you can load a trace and then start it with the template settings.
To help identify long-running queries, you use Profiler’s Create Trace
Wizard to run the TSQL_Duration
template. You can specify the length of the long-running queries you are
trying to identify and then have them recorded in a log.
SQL Profiler captures data
by using a trace based on the selected events, data columns, and
filters. The trace is the basis for all data collected; you can define a
trace on an ad hoc basis, draw a trace from a template, or use a
combination of the two. Even after data has been captured, you can apply
filters to focus on the type of information you want. You might
therefore want to save traces even after you are finished with the
current activity, because a past trace can often be applied to other
circumstances later.
At times, when monitoring
with Profiler, you may find the amount of data provided to be
considerable and possibly overwhelming. It can be difficult to find what
you are looking for in a trace that covers a broad range of events. A
useful technique that can ease this process is to write a trace to a SQL
Server table and then query the table from within the Query Analyzer.
If you know what you are looking for, this method can greatly speed up
finding the data in the trace you need.
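As a sketch of this technique, a saved trace file can be loaded into a table with the built-in fn_trace_gettable function and then queried directly; the file path and table name below are hypothetical:

```sql
-- Import a saved trace file into a table (path is hypothetical)
SELECT *
INTO TraceResults
FROM fn_trace_gettable(N'C:\Traces\LongQueries.trc', DEFAULT);

-- Find the ten longest-running statements captured in the trace
SELECT TOP 10 TextData, Duration, StartTime
FROM TraceResults
WHERE TextData IS NOT NULL
ORDER BY Duration DESC;
```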
Profiler can store captured
data in a text file or in a table. If you decide to store the data in a
SQL Server table, you should not store it in a database you are
profiling, and, if possible, not even on the same server because it
could affect the performance of the server you are profiling. Instead,
you should store the trace data in a database on another server. After
data is captured (which is the easy part of the process), you must sift
through the data collected to draw some meaning from the results.
Profiler can use many
existing templates to gather information for various types of
circumstances. You might want to select some of them to see what
information is being gathered. After a trace is created, it is
permanently stored until it is deleted. You can start a trace again by
name through the Profiler interface or via a stored procedure.
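Starting a trace through stored procedures can be sketched as follows; the output file path is hypothetical, and the event and column IDs used (12 = SQL:BatchCompleted, 1 = TextData, 13 = Duration) are the documented trace identifiers:

```sql
DECLARE @TraceID int;

-- Create the trace; output is written to a file (path is hypothetical)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyTrace', 50;

-- Capture TextData (column 1) and Duration (column 13)
-- for the SQL:BatchCompleted event (event 12)
EXEC sp_trace_setevent @TraceID, 12, 1, 1;
EXEC sp_trace_setevent @TraceID, 12, 13, 1;

-- Start the trace (status 1 = start, 0 = stop, 2 = close and delete)
EXEC sp_trace_setstatus @TraceID, 1;
```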
To
use the output generated by a trace, you must first determine what type
of data you are most interested in from the trace. The next section
describes how to get the most valuable information from the trace
results.
Defining a Profiler Trace
When using SQL Profiler to
define a trace, you use event categories to select the events to
monitor. Event categories are grouped into classes of events, and you
can select the following classes:
Broker— Finds events produced by the Service Broker.
CLR— Finds events produced through the loading of assemblies.
Cursors— Finds events produced by the use of cursors.
Database— Finds events produced when files grow or shrink automatically.
Deprecation— Finds events produced by elements of SQL Server that are to be removed in a future release.
Errors and Warnings— Finds events produced when an error or warning occurs.
Full Text— Finds events produced by use of full-text indexing and full-text queries.
Locks— Finds events produced when a lock is acquired or other lock activity occurs.
OLEDB— Finds events produced through calls from an OLE DB interface.
Objects— Finds events that occur as objects are created, opened, closed, or deleted.
Performance— Finds events produced when SQL data manipulations execute.
Progress Report— Finds events produced through online index operations.
Scans— Finds events produced when tables and indexes are scanned.
Server— Finds general server events.
Security Audit— Finds events used to audit server activity.
Sessions— Finds events produced by clients connecting and disconnecting.
Stored Procedures— Finds events produced by the execution of procedures.
Transactions—
Finds events produced by the execution of Microsoft Distributed
Transaction Coordinator transactions or through writing to the
transaction log.
TSQL— Finds events produced by the execution of T-SQL statements.
User Configurable— Finds user-configurable events.
Each event class can monitor
various objects. To select any of the objects when defining a trace,
you use the Events tab of the Trace Properties dialog box. You add and
remove objects, not whole classes, although you can trace a whole class
of objects, if desired.
Using Profiler Traces to Diagnose Locking
SQL Profiler provides the Locks
event classes to monitor locking behavior during trace operations.
Several of these classes are useful in monitoring locking, blocking, and
deadlocking situations on a server:
Lock:Acquired— This event fires to show the acquisition of a resource lock.
Lock:Cancel—
With this class, an event is fired when a lock on a resource has been
cancelled. SQL Server can cancel a lock because of a deadlock or through
a programmatic process cancellation.
Lock:Deadlock—
With this class, an event fires when two concurrent transactions have
deadlocked each other by trying to obtain locks on resources that the
other owns.
Lock:Deadlock Chain— With this class, you can see the chain of events produced for each of the processes leading up to a deadlock situation.
Lock:Escalation— With this class, an event fires when the server determines that a lock should be converted to a larger scope.
Lock:Released— With this class, an event fires when a resource lock is released.
Lock:Timeout— This event fires when a lock request has timed out because another process is blocking a resource with its own lock.
You use Lock:Acquired and Lock:Released
to monitor the timing of lock behavior. These events indicate the type
of lock and the length of time the lock was held. Often, a redesign of
the application that is setting the locks in place can lessen the lock
duration considerably.
You use Lock:Deadlock, Lock:Deadlock Chain, and Lock:Timeout
to monitor deadlock and timeout situations. This information is useful
in determining whether deadlocks and timeouts are affecting the user or
application.
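If the trace output has been written to a table, the lock-related rows can be isolated by their EventClass IDs; the table name here is hypothetical, and the IDs used (25 = Lock:Deadlock, 27 = Lock:Timeout, 59 = Lock:Deadlock Chain) are the documented trace event numbers:

```sql
-- Pull only deadlock- and timeout-related events from a saved trace table
SELECT EventClass, SPID, StartTime, TextData
FROM TraceResults
WHERE EventClass IN (25, 27, 59)  -- Deadlock, Timeout, Deadlock Chain
ORDER BY StartTime;
```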
The SQL Profiler Results window is segmented into two view panes. If you have included TextData
as one of the columns in your definition, the bottom pane shows SQL
statement information. The top pane illustrates the current trace data
view where event information is displayed, based on current filter
settings.
After you have collected
trace event data, you can save the trace to replay it later. The SQL
Profiler Playback feature is powerful but carries a little overhead. It
is well worth considering having a test machine available to act as a
playback and troubleshooting server.
Playback of events is
accomplished through the SQL Server multithreaded playback engine. This
engine can simulate user connections and SQL Server authentication. The
event data can be played back to reproduce the activity captured in the
trace. Replay can be useful in troubleshooting an application or another
process problem.
Trace Playback and Diagnosis
After you have
identified a problem and implemented corrections, you should run the
trace that was originally collected against the corrected application or
process to see whether the proposed solution accomplishes the desired
effect.
Replaying
the original trace can be useful in designing solutions. The trace
replay feature has advanced debugging support, enabling you to make use
of breakpoints and run-to-cursor features.
When the target computer
is going to be a computer other than the computer originally traced, you
must ensure that the database IDs on the target are the same as those
on the source. You can accomplish this by creating (from the source) a
backup of the master database as well as
any user databases referenced in the trace and restoring them on the
target. In this manner, you can use a test server as a debugging server
for any multiple-application environment.
The default database for
each login contained in a trace must be set on the target. The default
database of the trace activity login must be set to the database that
matches that login name, even in cases in which the database name might
be different. To set the default database of the login, you use the sp_defaultdb system stored procedure.
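For example, to point a login from the trace at the matching database on the target server (the login and database names here are hypothetical):

```sql
-- Set the default database for a login that appears in the trace
EXEC sp_defaultdb @loginame = N'TraceUser', @defdb = N'SalesDB';
```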
You have the option of
replaying the events in the order in which they were traced. This option
enables debugging, which means you can implement debugging techniques
such as stepping through the trace. You can replay the events using
multiple threads to optimize performance; however, this disables
debugging. The default option is to display the results of the replay.
If the trace you want to replay is a large capture, you might want to
disable this option to save disk space.
Using Profiler to Gather a Workload
One of the most common
uses of Profiler is to collect a workload sample of events for tuning
the physical database. You can analyze the database design from a
workload file by using the Database Engine Tuning Advisor (DTA). SQL
Server Profiler provides a predefined Tuning template that gathers the
appropriate T-SQL events in the trace output so it can be used as a
workload for the DTA.
The DTA can use a
workload file generated by Profiler to analyze performance effects.
After analyzing the effects of a workload on a database, the DTA provides
recommendations to add, remove, or modify physical design structures.
These physical performance structures include clustered indexes,
nonclustered indexes, indexed views, and partitioning.